Toward Ethical Robots via Mechanized Deontic Logic
Authors
Abstract
We suggest that mechanized multi-agent deontic logics might be appropriate vehicles for engineering trustworthy robots. Mechanically checked proofs in such logics can serve to establish the permissibility (or obligatoriness) of agent actions, and such proofs, when translated into English, can also explain the rationale behind those actions. We use the logical framework Athena to encode a natural deduction system for a deontic logic recently proposed by Horty for reasoning about what agents ought to do. We present the syntax and semantics of the logic, discuss its encoding in Athena, and illustrate with an example of a mechanized proof.

Introduction

As machines assume an increasingly prominent role in our lives, there is little doubt that they will eventually be called upon to make important, ethically charged decisions. How can we trust that such decisions will be made on sound ethical principles? Some have claimed that such trust is impossible and that, inevitably, AI will produce robots that both have tremendous power and behave immorally (Joy 2000). These predictions certainly have some traction, particularly among a public that seems bent on paying good money to see films depicting such dark futures. But our outlook is a good deal more optimistic. We see no reason why the future, at least in principle, can’t be engineered to preclude doomsday scenarios of malicious robots taking over the world. One approach to the task of building well-behaved robots emphasizes careful ethical reasoning based on mechanized formal logics of action, obligation, and permissibility; that is the approach we explore in this paper. It is a line of research in the spirit of Leibniz’s famous dream of a universal moral calculus (Leibniz 1984):

When controversies arise, there will be no more need for a disputation between two philosophers than there would be between two accountants [computistas]. It would be enough for them to pick up their pens and sit at their abacuses, and say to each other (perhaps having summoned a mutual friend): ‘Let us calculate.’

∗ We gratefully acknowledge that this research was in part supported by the Air Force Research Labs (AFRL), Rome. Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

In the future we envisage, Leibniz’s “calculation” would boil down to formal proof and/or model generation in rigorously defined, machine-implemented logics of action and obligation. Such logics would allow for proofs establishing that:

1. robots take only permissible actions; and
2. all actions that are obligatory for robots are actually performed by them (subject to ties and conflicts among available actions).

Moreover, such proofs would be highly reliable (i.e., have a very small “trusted base”) and explained in ordinary English. Clearly, this remains largely a vision. There are many thorny issues, not least among which are criticisms regarding the practical relevance of such formal logics, efficiency issues in their mechanization, and so on; we will discuss some of these points shortly. Nevertheless, mechanized ethical reasoning remains an intriguing vision worth investigating. Of course, one could also object to the wisdom of logic-based AI in general. While other ways of pursuing AI may well be preferable in certain contexts, we believe that in this case a logic-based approach (Bringsjord & Ferrucci 1998a; 1998b; Genesereth & Nilsson 1987; Nilsson 1991; Bringsjord, Arkoudas, & Schimanski forthcoming) is promising, because one of the central issues here is that of trust, and mechanized formal proofs are perhaps the single most effective tool at our disposal for establishing trust.
Deontic logic, agency, and action

In standard deontic logic (Chellas 1980; Hilpinen 2001; Åqvist 1984), or just SDL, the formula ○P can be interpreted as saying that it ought to be the case that P, where P denotes some state of affairs or proposition. Notice that there is no agent in the picture, nor are there actions that an agent might perform. This is a direct consequence of the fact that SDL is derived directly from standard modal logic, which applies the possibility and necessity operators ◇ and □ to formulae standing for propositions or states of affairs. For example, the deontic logic D∗ has one rule of inference, viz.,
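The possible-worlds reading of ○P sketched above (P holds at every deontically ideal alternative of the current world) can be made concrete with a small model checker. The sketch below is our own illustration in Python, not the paper’s Athena encoding; the model, the proposition `paid`, and all names are hypothetical.

```python
# Minimal sketch of SDL's Kripke semantics (assumed illustration, not from the paper):
# O(p) ("it ought to be that p") holds at world w iff p holds at every world
# deontically accessible from w. Requiring seriality of the accessibility
# relation validates the D axiom: O(p) -> not O(not p).

from typing import Callable, Dict, Set

World = str
Prop = Callable[[World], bool]

class KripkeModel:
    def __init__(self, worlds: Set[World], access: Dict[World, Set[World]]):
        # 'access' maps each world to its deontically ideal alternatives.
        # SDL assumes seriality: every world sees at least one world.
        assert all(access.get(w) for w in worlds), "accessibility must be serial"
        self.worlds = worlds
        self.access = access

    def ought(self, p: Prop) -> Prop:
        """O(p): true at w iff p holds in all worlds accessible from w."""
        return lambda w: all(p(v) for v in self.access[w])

# Hypothetical toy model: both ideal alternatives of w1 satisfy 'paid'.
m = KripkeModel({"w1", "w2", "w3"},
                {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}})
paid: Prop = lambda w: w in {"w2", "w3"}

print(m.ought(paid)("w1"))  # O(paid) holds at w1: True

# D axiom O(paid) -> not O(not paid), checked at every world:
d_axiom = lambda w: (not m.ought(paid)(w)) or (not m.ought(lambda v: not paid(v))(w))
print(all(d_axiom(w) for w in m.worlds))  # True, guaranteed by seriality
```

Note that dropping the seriality requirement would allow a world with no ideal alternatives, at which both O(p) and O(¬p) hold vacuously; that is precisely the situation the D axiom rules out.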
Similar papers
Praise, blame, obligation, and DWE: Toward a framework for classical supererogation and kin
Continuing prior work by the author, a simple classical system for personal obligation is integrated with a fairly rich system for aretaic (agent-evaluative) appraisal. I then explore various relationships between definable aretaic statuses such as praiseworthiness and blameworthiness and deontic statuses such as obligatoriness and impermissibility. I focus on partitions of the normative status...
Automated Reasoning for Robot Ethics
Deontic logic is a very well researched branch of mathematical logic and philosophy. Various kinds of deontic logics are considered for different application domains like argumentation theory, legal reasoning, and acts in multi-agent systems. In this paper, we show how standard deontic logic can be used to model ethical codes for multi-agent systems. Furthermore we show how Hyper, a high perfor...
Czeżowski’s axiological concepts as full-fledged modalities
This short note provides a tentative formalization of Czeżowski’s ideas about axiological concepts: Good and Evil are conceived of as modalities rather than as predicates. A natural account of the resulting “ethical logic” appears to be very close to standard deontic logic. If one does not resolve to become an antirealist regarding moral values, a possible way out is to become a revisionist abo...
Contextual Deontic Cognitive Event Calculi for Ethically Correct Robots
However, as McNamara (2010) points out, it’s long been known that in light of “Kant’s Law” we then immediately face a contradiction, for in KTd, ⊢ Oφ → ♦φ. This conditional is known as ‘Kant’s Law,’ and we call the reasoning involving it and Jones the ‘Kant’s-Law Paradox’ (K-LP). In light of this paradox, a machine or robot intended to operate as a morally competent banker overseeing Jones and ...
Dynamical formation control of wheeled mobile robots based on fuzzy logic
In this paper, the important formation control problem of nonholonomic wheeled mobile robots is investigated via a leader-follower strategy. To this end, the dynamics model of the considered wheeled mobile robot is derived using Lagrange equations of motion. Then, using ADAMS multi-body simulation software, the obtained dynamics of the wheeled system in MATLAB software is verified. After that, ...